
    Bounded CCA2-Secure Non-Malleable Encryption

    Under an adaptive chosen-ciphertext attack (CCA2), the security of an encryption scheme must hold against adversaries that have access to a decryption oracle. We consider a weakening of CCA2 security, wherein security need only hold against adversaries making an a priori bounded number of queries to the decryption oracle. Concerning this notion, which we call bounded-CCA2 security, we show the following two results. (1) Bounded-CCA2-secure non-malleable encryption schemes exist if and only if semantically secure (IND-CPA-secure) encryption schemes exist. (As far as we know, bounded-CCA2 non-malleability is the strongest notion of security known to be satisfiable assuming only the existence of semantically secure encryption schemes.) (2) In contrast to CCA2 security, bounded-CCA2 security alone does not imply non-malleability. In particular, if there exists an encryption scheme that is bounded-CCA2 secure, then there exists another encryption scheme which remains bounded-CCA2 secure but is malleable under a simple chosen-plaintext attack.
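
    To make the bounded notion concrete, here is a minimal Python sketch of the $q$-bounded IND-CCA2 game; the scheme interface (keygen/enc/dec) and the adversary object are placeholders of ours, not anything specified in the paper.

        import secrets

        # Hypothetical sketch of the q-bounded CCA2 game; `scheme` and
        # `adversary` are placeholder objects, not the paper's constructions.
        def bounded_cca2_game(scheme, adversary, q):
            """One run of the IND-CCA2 game with at most q decryption queries."""
            pk, sk = scheme.keygen()
            state = {"queries": 0, "challenge": None}

            def dec_oracle(ct):
                # Refuse once the budget is spent, and never decrypt the challenge.
                if state["queries"] >= q or ct == state["challenge"]:
                    return None
                state["queries"] += 1
                return scheme.dec(sk, ct)

            m0, m1 = adversary.choose(pk, dec_oracle)   # phase-1 queries count too
            b = secrets.randbits(1)
            state["challenge"] = scheme.enc(pk, (m0, m1)[b])
            guess = adversary.guess(state["challenge"], dec_oracle)
            return guess == b                           # True iff the adversary wins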

    Scaling ORAM for Secure Computation

    We design and implement a Distributed Oblivious Random Access Memory (ORAM) data structure that is optimized for use in two-party secure computation protocols. We improve upon the access time of previous constructions by a factor of up to ten, their memory overhead by a factor of one hundred or more, and their initialization time by a factor of thousands. We are able to instantiate ORAMs that hold $2^{34}$ bytes, and perform operations on them in seconds, which was not previously feasible with any implemented scheme. Unlike prior ORAM constructions based on hierarchical hashing, permutation, or trees, our Distributed ORAM is derived from the new Function Secret Sharing scheme introduced by Boyle, Gilboa and Ishai. This significantly reduces the amount of secure computation required to implement an ORAM access, albeit at the cost of $O(n)$ efficient local memory operations. We implement our construction and find that, despite its poor $O(n)$ asymptotic complexity, it still outperforms the fastest previously known constructions, Circuit ORAM and Square-root ORAM, for datasets that are 32 KiB or larger, and outperforms prior work on applications such as stable matching or binary search by factors of two to ten.
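
    The two-server read pattern behind this construction can be illustrated with a toy Python sketch. For clarity, the point-function shares below are full-length one-hot XOR shares rather than the succinct DPF keys of Boyle, Gilboa and Ishai, so the sketch preserves the $O(n)$ per-server scan but not the key compression that makes the real scheme practical.

        import secrets

        # Toy two-server FSS-style private read (shares are full-length here;
        # real DPF keys are logarithmic in n).
        def share_point_function(n, alpha):
            """XOR-share the indicator vector of index alpha across two servers."""
            r = [secrets.randbits(1) for _ in range(n)]
            f = [1 if i == alpha else 0 for i in range(n)]
            return r, [fi ^ ri for fi, ri in zip(f, r)]

        def server_eval(db, key):
            """Each server XORs the records its share selects: O(n) local work."""
            acc = 0
            for rec, bit in zip(db, key):
                if bit:
                    acc ^= rec
            return acc

        db = [7, 13, 42, 99]                  # database replicated at both servers
        k0, k1 = share_point_function(len(db), alpha=2)
        assert server_eval(db, k0) ^ server_eval(db, k1) == db[2]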

    Improved Straight-Line Extraction in the Random Oracle Model With Applications to Signature Aggregation

    The goal of this paper is to improve the efficiency and applicability of straight-line extraction techniques in the random oracle model. Straight-line extraction in the random oracle model refers to the existence of an extractor which, given the random oracle queries made by a prover $P^*(x)$ on some theorem $x$, is able to produce a witness $w$ for $x$ with roughly the same probability that $P^*$ produces a verifying proof. This notion applies to both zero-knowledge protocols and verifiable computation where the goal is compressing a proof. Pass (CRYPTO '03) first showed how to achieve this property for NP using a cut-and-choose technique which incurred a $\lambda^2$-bit overhead in communication, where $\lambda$ is a security parameter. Fischlin (CRYPTO '05) presented a more efficient technique based on "proofs of work" that sheds this $\lambda^2$ cost, but only applies to a limited class of Sigma protocols with a "quasi-unique response" property, which, for example, does not necessarily include the standard OR composition for Sigma protocols. With Schnorr/EdDSA signature aggregation as a motivating application, we develop new techniques to improve the computation cost of straight-line extractable proofs. Our improvements to the state of the art range from 70x to 200x for the best compression parameters. This is due to a uniquely suited polynomial evaluation algorithm, and the insight that a proof of work that relies on multi-collisions and the birthday paradox is faster to solve than inverting a fixed target. Our collision-based proof of work more generally improves the Prover's random oracle query complexity when applied in the NIZK setting as well. In addition to reducing the query complexity of Fischlin's Prover, for a special class of Sigma protocols we can for the first time closely match a new lower bound we present. Finally, we extend Fischlin's technique so that it applies to a more general class of strongly-sound Sigma protocols, which includes the OR composition. We achieve this by carefully randomizing Fischlin's technique: we show that its current deterministic nature prevents its application to certain multi-witness languages.
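
    The birthday-paradox insight can be seen in a toy Python comparison; the hash, its truncation, and the parameters below are illustrative choices of ours, not the paper's. Finding a $k$-way collision on a $b$-bit hash costs roughly $2^{b(k-1)/k}$ queries, versus about $2^b$ to hit a fixed target.

        import hashlib
        from collections import defaultdict

        def H(x, bits=20):
            """A b-bit truncated hash standing in for the random oracle."""
            d = hashlib.sha256(x.to_bytes(8, "big")).digest()
            return int.from_bytes(d, "big") >> (256 - bits)

        def fixed_target_pow(target, bits=20):
            """Classic proof of work: expect ~2^bits hash queries."""
            x = 0
            while H(x, bits) != target:
                x += 1
            return x

        def collision_pow(k, bits):
            """Birthday-style proof of work: ~2^(bits*(k-1)/k) queries."""
            buckets, x = defaultdict(list), 0
            while True:
                buckets[H(x, bits)].append(x)
                if len(buckets[H(x, bits)]) == k:
                    return buckets[H(x, bits)]
                x += 1

        xs = collision_pow(k=3, bits=16)       # ~2^11 queries vs ~2^16 to invert
        assert len({H(x, 16) for x in xs}) == 1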

    Secure Stable Matching at Scale

    When a group of individuals and organizations wish to compute a stable matching---for example, when medical students are matched to medical residency programs---they often outsource the computation to a trusted arbiter in order to preserve the privacy of participants' preferences. Secure multi-party computation offers the possibility of private matching processes that do not rely on any common trusted third party. However, stable matching algorithms have previously been considered infeasible for execution in a secure multi-party context on non-trivial inputs because they are computationally intensive and involve complex data-dependent memory access patterns. We adapt the classic Gale-Shapley algorithm for use in such a context, and show experimentally that our modifications yield a lower asymptotic complexity and more than an order of magnitude in practical cost improvement over previous techniques. Our main improvements stem from designing new oblivious data structures that exploit the properties of the matching algorithms. We apply a similar strategy to scale the Roth-Peranson instability chaining algorithm, currently in use by the National Resident Matching Program. The resulting protocol is efficient enough to be useful at the scale required for matching medical residents nationwide, taking just over 18 hours to complete an execution simulating the 2016 national resident match with more than 35,000 participants and 30,000 residency slots.
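
    For reference, here is a minimal (and deliberately non-oblivious) Gale-Shapley in Python; the paper's contribution is executing this style of algorithm inside secure computation with oblivious data structures, which this sketch does not attempt.

        def gale_shapley(proposer_prefs, acceptor_prefs):
            """proposer_prefs[p] lists acceptors best-first; returns a stable match."""
            rank = {a: {p: i for i, p in enumerate(prefs)}
                    for a, prefs in acceptor_prefs.items()}
            next_choice = {p: 0 for p in proposer_prefs}
            engaged = {}                                 # acceptor -> proposer
            free = list(proposer_prefs)
            while free:
                p = free.pop()
                a = proposer_prefs[p][next_choice[p]]
                next_choice[p] += 1
                if a not in engaged:
                    engaged[a] = p
                elif rank[a][p] < rank[a][engaged[a]]:   # a prefers the newcomer
                    free.append(engaged[a])
                    engaged[a] = p
                else:
                    free.append(p)                       # p proposes again later
            return engaged

        students = {"s1": ["h1", "h2"], "s2": ["h1", "h2"]}
        hospitals = {"h1": ["s2", "s1"], "h2": ["s1", "s2"]}
        print(gale_shapley(students, hospitals))         # {'h1': 's2', 'h2': 's1'}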

    A Better Method to Analyze Blockchain Consistency

    The celebrated Nakamoto consensus protocol ushered in several new consensus applications, including cryptocurrencies. A few recent works have analyzed important properties of blockchains, including most significantly consistency, which is a guarantee that all honest parties output the same sequence of blocks throughout the execution of the protocol. To establish consistency, the prior analysis of Pass, Seeman and shelat required a careful counting of certain combinatorial events that was difficult to apply to variations of Nakamoto. The work of Garay, Kiayias, and Leonardos provides another method of analyzing the blockchain under both a synchronous and a partially synchronous setting. The contribution of this paper is the development of a simple Markov-chain-based method for analyzing consistency properties of blockchain protocols. The method includes a formal way of stating strong concentration bounds as well as easy ways to concretely compute the bounds. We use our new method to answer a number of basic questions about consistency of blockchains:
    • Our new analysis provides a tighter guarantee on the consistency property of Nakamoto's protocol, including for parameter regimes which previous work could not consider;
    • We analyze a family of delaying attacks and extend them to other protocols;
    • We analyze how long a participant should wait before considering a high-value transaction "confirmed" (a toy version of this calculation is sketched below);
    • We analyze the consistency of CliqueChain, a variation of the Chainweb system;
    • We provide the first rigorous consistency analysis of GHOST under the partially synchronous setting and also analyze a folklore balancing attack.
    In each case, we use our framework to experimentally analyze the consensus bounds for various network delay parameters and adversarial computing percentages. We hope our techniques enable authors of future blockchain proposals to provide a more rigorous analysis of their schemes.
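
    For the confirmation question above, here is a back-of-the-envelope Python version using the classic random-walk bound; this is Nakamoto's simple calculation, not the tighter Markov-chain analysis developed in the paper.

        def catch_up_probability(q, z):
            """P(an attacker with hashpower share q ever erases a z-block lead)."""
            p = 1 - q
            return 1.0 if q >= p else (q / p) ** z

        def confirmations_needed(q, risk):
            """Smallest z whose catch-up probability is below the accepted risk."""
            z = 0
            while catch_up_probability(q, z) > risk:
                z += 1
            return z

        print(confirmations_needed(q=0.10, risk=1e-3))   # 4 under this crude bound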

    Analysis of the Blockchain Protocol in Asynchronous Networks

    Nakamoto's famous blockchain protocol enables achieving consensus in a so-called *permissionless setting*: anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents "sybil attacks" (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. "moderately hard functions") introduced by Dwork and Naor (Crypto '92). The analysis of the blockchain consensus protocol (a.k.a. Nakamoto consensus) has been a notoriously difficult task. Prior works that analyze it either make the simplifying assumption that network channels are fully synchronous (i.e., messages are instantly delivered without delays) (Garay et al., Eurocrypt '15) or only consider specific attacks (Nakamoto '08; Sompolinsky and Zohar, Financial Crypto '15); additionally, as far as we know, none of them deal with players joining or leaving the protocol. In this work we prove that the blockchain consensus mechanism satisfies strong forms of *consistency* and *liveness* in an asynchronous network with adversarial delays that are a priori bounded, within a formal model allowing for adaptive corruption and spawning of new players, assuming that the computational puzzle is modeled as a random oracle. (We complement this result by showing a simple attack against the blockchain protocol in a fully asynchronous setting, demonstrating that the "puzzle hardness" needs to be appropriately set as a function of the maximum network delay; this attack applies even for static corruption.) As an independent contribution, we define an abstract notion of a blockchain protocol and identify appropriate security properties of such protocols; we prove that Nakamoto's blockchain protocol satisfies them and that these properties are sufficient for typical applications; we hope that this abstraction may simplify further applications of blockchains.
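
    The role of the maximum delay can be illustrated with a toy discrete-time simulation; the model and parameters below are crude simplifications of ours, not the paper's analysis. As the delay grows relative to the expected block interval, an increasing fraction of blocks are mined on stale views and fork, which is why the puzzle hardness must scale with the delay.

        import random

        def fork_rate(p_block, delay, steps=200_000, miners=10, seed=1):
            """Fraction of blocks mined before the previous block propagated."""
            rng = random.Random(seed)
            last, forked, total = -delay, 0, 0
            for t in range(steps):
                hits = sum(rng.random() < p_block for _ in range(miners))
                if hits == 0:
                    continue
                total += hits
                # Blocks mined within `delay` of the last one extend a stale view;
                # simultaneous same-step wins also fork.
                forked += hits if t - last < delay else hits - 1
                last = t
            return forked / max(total, 1)

        for d in (1, 5, 25, 125):               # delay in steps; interval is ~50
            print(d, round(fork_rate(p_block=0.002, delay=d), 3))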

    Threshold Fully Homomorphic Encryption and Secure Computation

    Cramer, Damgård, and Nielsen [CDN01] show how to construct an efficient secure multi-party computation scheme using a threshold homomorphic encryption scheme that has four properties: (i) an honest-verifier zero-knowledge proof of knowledge of encrypted values, (ii) a proof that multiplications are performed correctly, (iii) threshold decryption, and (iv) a trusted shared-key setup. Naor and Nissim [NN01a] show how to construct secure multi-party protocols for a function $f$ whose communication is proportional to the communication required to evaluate $f$ without security, albeit at the cost of computation that might be exponential in the description of $f$. Gentry [Gen09a] shows how to combine both ideas with fully homomorphic encryption in order to construct a secure multi-party protocol that allows evaluation of a function $f$ using communication that is *independent of the circuit description of $f$* and computation that is polynomial in $|f|$. This paper addresses the major drawbacks of Gentry's approach: we eliminate the use of non-black-box methods that are inherent in Naor and Nissim's compiler. To do this we show how to modify the fully homomorphic encryption construction of van Dijk et al. [vDGHV10] into a threshold fully homomorphic encryption scheme. We directly construct (information-theoretically) secure protocols for sharing the secret key for our threshold scheme (thereby removing the setup assumption) and for jointly decrypting a bit. All of these constructions are constant-round, and we thoroughly analyze their complexity; they address requirements (iii) and (iv). The fact that the encryption scheme is fully homomorphic addresses requirement (ii). To address the need for an honest-verifier zero-knowledge proof of knowledge of encrypted values, we instead argue that a weaker solution suffices. We provide a 2-round black-box protocol that allows us to prove knowledge of encrypted bits. Our protocol is not zero-knowledge, but it provably does not release any information about the bit being discussed, and this is sufficient to prove the correctness of a simulation in a method similar to Cramer et al. Altogether, *we construct the first black-box secure multi-party computation protocol that allows evaluation of a function $f$ using communication that is independent of the circuit description of $f$*.
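
    The key-sharing step rests on information-theoretic additive secret sharing, sketched below in Python; the paper's joint key-generation and decryption protocols are substantially more involved than this minimal primitive.

        import secrets

        def share(secret, n, modulus):
            """Split `secret` into n additive shares; any n-1 reveal nothing."""
            shares = [secrets.randbelow(modulus) for _ in range(n - 1)]
            shares.append((secret - sum(shares)) % modulus)
            return shares

        def reconstruct(shares, modulus):
            return sum(shares) % modulus

        M = 2**127 - 1
        key = secrets.randbelow(M)
        assert reconstruct(share(key, 5, M), M) == key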

    Adaptively Secure MPC with Sublinear Communication Complexity

    A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and the resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: (1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. (2) Under the same assumptions, we construct a Bob-optimized 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and the total computation of Bob are sublinear in the function size and in Alice's input size. We prove the impossibility of Alice-optimized protocols. (3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS '18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold-PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, at only a small sacrifice in concrete round complexity and asymptotic communication complexity.
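
    For readers unfamiliar with LFE, its interface (following Quach, Wee, and Wichs) can be summarized as below; this is a sketch of the syntax, not the paper's formalization. The duality with FHE: in FHE the input is encrypted and the evaluator's work grows with $|C|$, while in LFE the circuit is compressed to a short digest and the encryptor's work is independent of $|C|$, reversing which party bears the circuit-sized cost.

        % LFE interface sketch (after Quach--Wee--Wichs, FOCS '18).
        \begin{align*}
          \mathsf{crs} &\leftarrow \mathsf{crsGen}(1^{\lambda})\\
          \mathsf{digest}_C &:= \mathsf{Compress}(\mathsf{crs}, C)
            && \text{short, deterministic digest of the circuit } C\\
          \mathsf{ct} &\leftarrow \mathsf{Enc}(\mathsf{crs}, \mathsf{digest}_C, x)
            && \text{encryption time independent of } |C|\\
          C(x) &= \mathsf{Dec}(\mathsf{crs}, C, \mathsf{ct})
            && \text{only the holder of } C \text{ can decrypt}
        \end{align*}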

    Bounded KDM Security from iO and OWF

    To date, all constructions in the standard model (i.e., without random oracles) of Bounded Key-Dependent Message (KDM) secure (or even just circularly secure) encryption schemes rely on specific assumptions (LWE, DDH, QR or DCR); all of these assumptions are known to imply the existence of collision-resistant hash functions. In this work, we demonstrate the existence of bounded KDM secure encryption assuming indistinguishability obfuscation for $P/poly$ and just one-way functions. Relying on the recent result of Asharov and Segev (STOC '15), this yields the first construction of a Bounded KDM secure (or even circularly secure) encryption scheme from an assumption that provably does not imply collision-resistant hash functions w.r.t. black-box constructions. Combining this with prior constructions, we show how to augment this Bounded KDM scheme into a Bounded CCA2-KDM scheme.
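
    For concreteness, here is a minimal Python sketch of the KDM oracle the adversary interacts with; the scheme interface is a placeholder of ours. In the bounded notion, the key-dependent functions $f$ range over circuits of a priori bounded size.

        def kdm_oracle(scheme, pk, sk, b):
            """Returns Enc(pk, f(sk)) if b == 1, else an encryption of an all-zero
            message of the same length; KDM security asks that the two cases be
            indistinguishable."""
            def oracle(f):                      # f maps the secret key to bytes
                m = f(sk)
                return scheme.enc(pk, m if b == 1 else bytes(len(m)))
            return oracle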

    Billion-Gate Secure Computation with Malicious Adversaries

    The goal of this paper is to assess the feasibility of two-party secure computation in the presence of a malicious adversary. Prior work has shown the feasibility of billion-gate circuits in the semi-honest model, but only of the 35k-gate AES circuit in the malicious model, in part because security in the malicious model is much harder to achieve. We show that by incorporating the best known techniques and parallelizing almost all steps of the resulting protocol, evaluating billion-gate circuits is feasible in the malicious model. Our results are in the standard model (i.e., no common reference strings or PKIs) and, in contrast to prior work, we do not use the random oracle model, which has well-established theoretical shortcomings.